Automatically classifying the tissue types of a Region of Interest (ROI) in medical imaging is an important application in Computer-Aided Diagnosis (CAD), e.g., classifying breast parenchymal tissue in mammograms or lung disease patterns in High-Resolution Computed Tomography (HRCT). Recently, the bag-of-features method has shown its power in this field, treating each ROI as a set of local features. In this paper, we investigate using the bag-of-features strategy to classify tissue types in medical imaging applications. Two important issues are considered here: visual vocabulary learning and weighting. Although there are already plenty of algorithms addressing these issues, all of them treat the two steps independently: the vocabulary is learned first, and then the histogram is weighted. Inspired by Auto-Context, which learns features and a classifier jointly, we develop a novel algorithm that learns the vocabulary and the weights jointly. The new algorithm, called Joint-ViVo, works iteratively. In each iteration, we first learn the weights for the visual words by maximizing the margin over ROI triplets, and then select the most discriminative visual words, based on the learned weights, for the next iteration. We test our algorithm on three tissue classification tasks: identifying brain tissue types in magnetic resonance imaging (MRI), classifying lung tissue in HRCT images, and classifying breast tissue density in mammograms. The results show that Joint-ViVo performs effectively for classifying tissues.
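The alternation the abstract describes (update per-word weights from ROI triplets, then prune to the most discriminative words) can be illustrated with a minimal sketch. This is not the paper's actual formulation: the function name, the hinge loss on weighted squared distances, the subgradient step, and the fixed keep fraction are all illustrative assumptions.

```python
import numpy as np

def joint_vivo_sketch(hists, triplets, n_iters=5, lr=0.1, keep_frac=0.8):
    """Hypothetical sketch of a Joint-ViVo-style alternation.

    hists:    (n_rois, n_words) bag-of-features histograms over the vocabulary.
    triplets: list of (anchor, positive, negative) row indices, where the
              anchor and positive share a tissue type and the negative differs.
    """
    active = np.arange(hists.shape[1])   # indices of surviving visual words
    w = np.ones(hists.shape[1])          # one weight per visual word
    for _ in range(n_iters):
        H, wa = hists[:, active], w[active]
        grad = np.zeros_like(wa)
        for a, p, n in triplets:
            # per-word contributions to weighted squared distances
            dp = wa * (H[a] - H[p]) ** 2
            dn = wa * (H[a] - H[n]) ** 2
            # hinge: want d(anchor, negative) - d(anchor, positive) >= 1
            if dn.sum() - dp.sum() < 1.0:
                grad += dp - dn          # subgradient of the violated hinge
        wa = np.maximum(wa - lr * grad, 0.0)  # keep weights non-negative
        w[active] = wa
        # keep the highest-weight (most discriminative) words for the next pass
        k = max(1, int(len(active) * keep_frac))
        active = active[np.argsort(wa)[::-1][:k]]
    return w, active
```

The returned `active` set plays the role of the refined vocabulary, and `w` the learned word weights; a real implementation would also re-extract histograms over the pruned vocabulary each round.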